1 | Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
In: CtrlGen: Controllable Generative Modeling in Language and Vision, Jan 2022, virtual, France. https://hal.inria.fr/hal-03540084 (2022)
2 | Can Character-based Language Models Improve Downstream Task Performance in Low-Resource and Noisy Language Scenarios?
In: Seventh Workshop on Noisy User-generated Text (W-NUT 2021, colocated with EMNLP 2021), Nov 2021, Punta Cana, Dominican Republic. https://hal.inria.fr/hal-03527328 ; https://aclanthology.org/2021.wnut-1.47/ (2022)
3 | First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: https://hal.inria.fr/hal-03161685 (2021)
4 | Can Multilingual Language Models Transfer to an Unseen Dialect? A Case Study on North African Arabizi
In: https://hal.inria.fr/hal-03161677 (2021)
5 | First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT
In: EACL 2021 - The 16th Conference of the European Chapter of the Association for Computational Linguistics, Apr 2021, Kyiv / Virtual, Ukraine. https://hal.inria.fr/hal-03239087 ; https://2021.eacl.org/ (2021)
6 | When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models
In: NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Jun 2021, Mexico City, Mexico. https://hal.inria.fr/hal-03251105 (2021)
7 | PAGnol: An Extra-Large French Generative Model
In: [Research Report] LightOn, 2021. https://hal.inria.fr/hal-03540159 (2021)
8 | Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering
In: https://hal.inria.fr/hal-03109187 (2021)
9 | Noisy UGC Translation at the Character Level: Revisiting Open-Vocabulary Capabilities and Robustness of Char-Based Models
In: W-NUT 2021 - 7th Workshop on Noisy User-generated Text (colocated with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic. https://hal.inria.fr/hal-03540174 (2021)
10 | Understanding the Impact of UGC Specificities on Translation Quality
In: W-NUT 2021 - Seventh Workshop on Noisy User-generated Text (colocated with EMNLP 2021), Association for Computational Linguistics, Nov 2021, Punta Cana, Dominican Republic. https://hal.inria.fr/hal-03540175 (2021)
11 | Challenging the Semi-Supervised VAE Framework for Text Classification
In: Second Workshop on Insights from Negative Results in NLP (colocated with EMNLP), Nov 2021, Punta Cana, Dominican Republic. https://hal.inria.fr/hal-03540081 ; https://insights-workshop.github.io/2021/ (2021)
Abstract:
Semi-Supervised Variational Autoencoders (SSVAEs) are widely used models for data-efficient learning. In this paper, we question the adequacy of the standard design of sequence SSVAEs for the task of text classification, as we exhibit two sources of overcomplexity for which we provide simplifications. These simplifications to SSVAEs preserve their theoretical soundness while providing a number of practical advantages in the semi-supervised setup where the result of training is a text classifier. The simplifications are the removal of (i) the Kullback-Leibler divergence from the objective and (ii) the fully unobserved latent variable from the probabilistic model. These changes relieve users from choosing a prior for their latent variables, make the model smaller and faster, and allow for a better flow of information into the latent variables. We compare the simplified versions to standard SSVAEs on 4 text classification tasks. On top of the above-mentioned simplifications, experiments show a speed-up of 26% while keeping equivalent classification scores. The code to reproduce our experiments is public.
Keyword:
[INFO.INFO-TT]Computer Science [cs]/Document and Text Processing
URL: https://hal.inria.fr/hal-03540081
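
As a rough illustration of the two simplifications described in this abstract, the sketch below shows what such an objective can look like: a reconstruction term with no KL penalty, a classification term on labelled batches, and no fully unobserved latent variable. It is a minimal PyTorch sketch under assumed model choices (GRU encoder, mean pooling, linear decoder); all names and hyperparameters are hypothetical, and it is not the authors' released code.

# Hypothetical sketch, not the authors' implementation: a simplified SSVAE-style
# objective with the KL divergence removed and no unobserved latent variable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplifiedSSVAE(nn.Module):
    def __init__(self, vocab_size, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)  # q(y | x)
        self.decoder = nn.Linear(hidden_dim, vocab_size)      # token reconstruction

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))   # (batch, seq, hidden)
        pooled = h.mean(dim=1)                    # sentence representation
        return self.classifier(pooled), self.decoder(h)

    def loss(self, tokens, labels=None):
        logits_y, logits_x = self(tokens)
        # Reconstruction term kept from the VAE objective; the KL term is gone,
        # so no prior has to be chosen for the latent representation.
        recon = F.cross_entropy(logits_x.reshape(-1, logits_x.size(-1)),
                                tokens.reshape(-1))
        if labels is None:          # unlabelled batch: reconstruction only
            return recon
        return recon + F.cross_entropy(logits_y, labels)  # labelled batch

# Training would then simply alternate labelled and unlabelled batches, e.g.:
# loss = model.loss(x_lab, y_lab) + model.loss(x_unlab)
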
17 | Can Character-based Language Models Improve Downstream Task Performance in Low-Resource and Noisy Language Scenarios? ...
18 | First Align, then Predict: Understanding the Cross-Lingual Ability of Multilingual BERT ...
19 | Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering ...